Early-stopped aggregation: Adaptive inference with computational efficiency

Ohn, Ilsang, Fan, Shitao, Jun, Jungbin, Lin, Lizhen

arXiv.org Machine Learning

When using a model selection approach or, more generally, an aggregation approach for adaptive statistical inference, it is often necessary, due to the lack of prior knowledge, to compute estimators over a wide range of model complexities, including unnecessarily large models, even when the true data-generating process is relatively simple. This requirement can lead to substantial computational inefficiency. In this work, we propose a novel framework for efficient model aggregation called early-stopped aggregation (ESA): instead of computing and aggregating estimators for all candidate models, we compute only a small number of simpler ones using an early-stopping criterion and aggregate only these for final inference. Our framework is versatile and applies both to Bayesian model selection, in particular within the variational Bayes framework, and to frequentist estimation, including a general penalized estimation setting. We investigate the adaptive optimality of the ESA approach across three learning paradigms. We first show that ESA achieves optimal adaptive contraction rates in the variational Bayes setting under mild conditions. We extend this result to variational empirical Bayes, where prior hyperparameters are chosen in a data-dependent manner. In addition, we apply the ESA approach to frequentist aggregation, including both penalization-based and sample-splitting implementations, and establish the corresponding theory. As we demonstrate, there is a clear unification between early-stopped Bayes and frequentist penalized aggregation, with a common "energy" functional comprising a data-fitting term and a complexity-control term that drives both procedures. We further present several applications and numerical studies that highlight the efficiency and strong performance of the proposed approach.
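
To make the idea concrete, the following Python sketch shows one possible reading of the ESA recipe under stated assumptions: candidate models are visited from simplest to most complex, each computed estimator is scored by an assumed "energy" equal to a data-fitting term plus a complexity-control term, computation stops once the energy has not improved for a few consecutive models, and only the computed estimators are aggregated with exponential weights. The energy function, patience rule, and temperature below are illustrative placeholders, not the paper's actual criterion.

import numpy as np

def energy(model, X, y, penalty):
    # Assumed energy functional: squared-error data-fitting term plus a
    # complexity-control term supplied per candidate.
    residuals = y - model.predict(X)
    return 0.5 * np.sum(residuals ** 2) + penalty

def early_stopped_aggregation(candidates, penalties, X, y, patience=2, temperature=1.0):
    # candidates: unfitted models ordered by increasing complexity (fit/predict API).
    # penalties: complexity-control term for each candidate (assumed given).
    fitted, energies = [], []
    best, since_best = np.inf, 0
    for model, pen in zip(candidates, penalties):
        model.fit(X, y)                      # compute only this estimator
        e = energy(model, X, y, pen)
        fitted.append(model)
        energies.append(e)
        if e < best:
            best, since_best = e, 0
        else:
            since_best += 1
        if since_best >= patience:           # illustrative early-stopping criterion
            break
    # Exponential-weight aggregation over the computed models only.
    w = np.exp(-(np.array(energies) - best) / temperature)
    w /= w.sum()
    return fitted, w

def aggregate_predict(fitted, weights, X_new):
    preds = np.stack([m.predict(X_new) for m in fitted], axis=0)
    return np.tensordot(weights, preds, axes=1)

For example, the candidates could be ridge regressors with decreasing regularization strength, so that later models are effectively more complex; the point of the sketch is only that models beyond the stopping index are never fitted.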


Pseudo-Labeling for Unsupervised Domain Adaptation with Kernel GLMs

Weill, Nathan, Wang, Kaizheng

arXiv.org Machine Learning

We propose a principled framework for unsupervised domain adaptation under covariate shift in kernel Generalized Linear Models (GLMs), encompassing kernelized linear, logistic, and Poisson regression with ridge regularization. Our goal is to minimize prediction error in the target domain by leveraging labeled source data and unlabeled target data, despite differences in covariate distributions. We partition the labeled source data into two batches: one for training a family of candidate models, and the other for building an imputation model. This imputation model generates pseudo-labels for the target data, enabling robust model selection. We establish non-asymptotic excess-risk bounds that characterize adaptation performance through an "effective labeled sample size", explicitly accounting for the unknown covariate shift. Experiments on synthetic and real datasets demonstrate consistent performance gains over source-only baselines.
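
The pipeline described above can be sketched in Python for the kernel ridge regression case; the split ratio, kernel choice, regularization grid, and squared-error selection rule are assumptions for illustration rather than the authors' exact algorithm.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

def pseudo_label_select(X_src, y_src, X_tgt, alphas=(1e-3, 1e-2, 1e-1, 1.0), gamma=1.0, seed=0):
    # Batch 1 trains the candidate models, batch 2 trains the imputation model.
    X1, X2, y1, y2 = train_test_split(X_src, y_src, test_size=0.5, random_state=seed)

    candidates = [KernelRidge(alpha=a, kernel="rbf", gamma=gamma).fit(X1, y1) for a in alphas]
    imputer = KernelRidge(alpha=1e-2, kernel="rbf", gamma=gamma).fit(X2, y2)

    # Pseudo-label the unlabeled target covariates, then pick the candidate
    # with the smallest pseudo-labeled target risk.
    pseudo_y = imputer.predict(X_tgt)
    risks = [np.mean((m.predict(X_tgt) - pseudo_y) ** 2) for m in candidates]
    best = int(np.argmin(risks))
    return candidates[best], alphas[best], risks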


Towards Reliable Model Selection for Unsupervised Domain Adaptation: An Empirical Study and A Certified Baseline

Neural Information Processing Systems

Existing approaches can be categorized into two types. The first type involves leveraging labeled source data for target-domain model selection [9, 14-16]. The second type designs unsupervised metrics based on priors of the learned target-domain structure and utilizes the metrics for model selection [17, 19, 18, 20].






0c72cb7ee1512f800abe27823a792d03-Supplemental.pdf

Neural Information Processing Systems

However, for the recommender system experiment, there are no natural representations for the candidate models. IS-g/DR-g: off-policy evaluation (OPE) methods can provide an estimate of the accumulative metric; the resulting methods are denoted as IS-EI and DR-EI, respectively. As there is limited information to be gained by repeatedly deploying the same model online, we exclude the models that have been deployed when choosing the next model to deploy for all the methods, including AOE. We simulate the "online" deployment scenario as follows: a multi-class classifier is given a set of inputs; for each input, the classifier returns a prediction of the label, and only binary immediate feedback about whether the predicted class is correct is available. The y-axis shows the gap in the accumulative metric between the optimal model and the estimated best model by each method.
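
As a rough illustration of the importance-sampling variant mentioned here (not the supplemental's actual implementation), the Python sketch below estimates each candidate classifier's accumulative metric from logged deployments and excludes already-deployed models when choosing the next one; the log format, propensities, and reward convention are assumptions.

import numpy as np

def is_estimate(candidate_predict, logs):
    # Importance-sampling estimate of a candidate classifier's accuracy.
    # logs: list of (x, action, propensity, reward) tuples from previously
    # deployed models, where the logging policy is assumed stochastic so that
    # every class has propensity > 0; reward is the binary correctness feedback.
    values = []
    for x, action, propensity, reward in logs:
        target_action = candidate_predict(x)
        weight = float(target_action == action) / propensity   # deterministic target policy
        values.append(weight * reward)
    return float(np.mean(values))

def choose_next_model(candidates, deployed_idx, logs):
    # Pick the candidate with the highest estimated metric, excluding
    # models that have already been deployed.
    scores = {i: is_estimate(m, logs) for i, m in enumerate(candidates) if i not in deployed_idx}
    return max(scores, key=scores.get)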